Deep Q-Learning for Self-Organizing Networks Fault Management and Radio Performance Improvement
We propose an algorithm to automate fault management in an outdoor cellular
network using deep reinforcement learning (RL) against wireless impairments.
This algorithm enables the cellular network cluster to self-heal by allowing RL
to learn how to improve the downlink signal-to-interference-plus-noise ratio
(SINR) through exploration and exploitation of various alarm corrective actions.
The main contributions of this paper are to 1) introduce a deep RL-based
fault-handling algorithm that self-organizing networks can implement in
polynomial runtime and 2) show that this fault management method can improve
the radio link performance in a realistic network setup. Simulation results
show that our proposed algorithm learns an action sequence that clears alarms
and improves performance in the cellular cluster better than existing
algorithms do, even under random network fault occurrences and user movements.
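To make the mechanics concrete, here is a minimal deep Q-learning sketch in the spirit of the paper, not the authors' implementation: the state layout (alarm flags plus a normalized downlink SINR), the action set, the toy environment dynamics, and the reward are all assumptions invented for illustration.

```python
# Minimal deep Q-learning sketch for alarm-driven self-healing.
# Everything here (state layout, actions, dynamics, reward) is a toy
# stand-in for the paper's simulated cellular cluster.
import random
import torch
import torch.nn as nn

N_ALARMS, N_ACTIONS = 4, 5    # hypothetical alarm types / corrective actions

class QNet(nn.Module):
    """Maps (alarm flags, normalized SINR) to Q-values over corrective actions."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(N_ALARMS + 1, 32), nn.ReLU(), nn.Linear(32, N_ACTIONS))

    def forward(self, x):
        return self.net(x)

def toy_step(state, action):
    """Invented dynamics: the matching action clears an alarm and lifts SINR."""
    next_state = state.clone()
    if action < N_ALARMS and state[action] > 0:
        next_state[action] = 0.0                                 # alarm cleared
        next_state[-1] = torch.clamp(state[-1] + 0.2, max=1.0)   # SINR gain
    reward = next_state[-1] - state[-1]            # reward = SINR improvement
    done = bool(next_state[:N_ALARMS].sum() == 0)  # all alarms cleared
    return next_state, reward, done

qnet = QNet()
opt = torch.optim.Adam(qnet.parameters(), lr=1e-3)
eps, gamma = 0.3, 0.9                          # exploration rate, discount

for episode in range(200):
    # Random initial alarms and a degraded (low) normalized SINR.
    state = torch.cat([torch.randint(0, 2, (N_ALARMS,)).float(),
                       torch.rand(1) * 0.5])
    for _ in range(10):
        # Epsilon-greedy exploration/exploitation over corrective actions.
        if random.random() < eps:
            action = random.randrange(N_ACTIONS)
        else:
            action = int(qnet(state).argmax())
        next_state, reward, done = toy_step(state, action)
        # One-step temporal-difference target.
        with torch.no_grad():
            target = reward + (0.0 if done else gamma * qnet(next_state).max())
        loss = (qnet(state)[action] - target) ** 2
        opt.zero_grad(); loss.backward(); opt.step()
        state = next_state
        if done:
            break
```

Each episode runs a fixed number of steps over a fixed action set, so a loop of this shape stays polynomial in the number of alarms and actions, consistent with the runtime claim above.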
Partially Blind Handovers for mmWave New Radio Aided by Sub-6 GHz LTE Signaling
For a base station that supports cellular communications in the sub-6 GHz LTE
and millimeter wave (mmWave) bands, we propose a supervised machine learning
algorithm to improve the handover success rate between the two bands using
prior sub-6 GHz and mmWave channel measurements within a temporal window.
The main contributions of our paper are to 1) introduce partially blind
handovers, 2) employ machine learning to perform handover success predictions
from sub-6 GHz to mmWave frequencies, and 3) show that this machine
learning-based algorithm, combined with partially blind handovers, can improve
the handover success rate in a realistic network setup of colocated cells.
Simulation results show improvement in handover success rates for our proposed
algorithm compared to standard handover algorithms.
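As a rough illustration of the prediction step only, the sketch below trains a classifier on synthetic windows of sub-6 GHz and mmWave measurements; the features, labels, logistic-regression model, and decision threshold are assumptions, not the paper's exact design.

```python
# Illustrative handover-success predictor on synthetic data (not the
# paper's model). Each sample stacks a temporal window of sub-6 GHz RSRP
# and prior mmWave measurements; the label marks handover success.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
WINDOW = 5                                    # hypothetical temporal window length
n = 2000
sub6 = rng.normal(-90, 6, size=(n, WINDOW))   # sub-6 GHz RSRP history (dBm)
mmw = rng.normal(-100, 10, size=(n, WINDOW))  # prior mmWave measurements (dBm)
X = np.hstack([sub6, mmw])                    # one feature row per handover attempt
# Toy ground truth: the handover succeeds when the recent mmWave signal is strong.
y = (mmw[:, -2:].mean(axis=1) > -100).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print(f"handover-success prediction accuracy: {clf.score(X_te, y_te):.2f}")

# A partially blind handover would then be attempted only when the predicted
# success probability clears a threshold (the 0.8 here is an assumption).
p_success = clf.predict_proba(X_te[:1])[0, 1]
attempt_handover = p_success > 0.8
```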
Uncoordinated Interference Avoidance Between Terrestrial and Non-Terrestrial Communications
This paper proposes an algorithm that uses geospatial analytics and the
muting of physical resources in next-generation base stations (BSs) to avoid
interference between terrestrial cellular communication systems and
non-terrestrial satellite systems. The information exchange between satellite
and terrestrial links is very limited, but a hybrid edge cloud node with access
to satellite trajectories can enable these BSs to take proactive measures. We
show simulation results to validate the superiority of our proposed algorithm
over a conventional method. Our algorithm runs in polynomial time, making it
suitable for real-time interference avoidance.
Comment: 5 pages, 4 figures, submitted to IEEE Global Communications Conference 202
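A toy sketch of the proactive geometry check, assuming the hybrid edge cloud node exposes a sampled satellite ground track and that interference matters inside a fixed footprint radius; the coordinates, radius, and per-slot muting granularity below are invented for illustration.

```python
# Proactive muting schedule from a known satellite ground track (a toy
# interpretation of the idea, not the paper's exact geospatial analytics).
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two lat/lon points in km."""
    r = 6371.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = p2 - p1
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

# Hypothetical inputs: BS sites and a satellite ground track sampled per slot.
base_stations = [("BS-1", 40.71, -74.00), ("BS-2", 40.75, -73.99)]
ground_track = [(t, 40.0 + 0.05 * t, -74.2 + 0.02 * t) for t in range(30)]
EXCLUSION_KM = 50.0    # assumed interference footprint radius

# O(slots x BSs): mute a BS's physical resources only in the slots where the
# satellite footprint overlaps it, rather than muting everywhere all the time.
mute_schedule = {}
for t, s_lat, s_lon in ground_track:
    for name, b_lat, b_lon in base_stations:
        if haversine_km(b_lat, b_lon, s_lat, s_lon) < EXCLUSION_KM:
            mute_schedule.setdefault(name, []).append(t)

print(mute_schedule)    # e.g. {"BS-1": [slots to mute], ...}
```

The double loop over time slots and BSs is the polynomial-time shape the abstract refers to.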
Deep Learning for Multi-User Proactive Beam Handoff: A 6G Application
This paper demonstrates the use of deep learning and time series data
generated from user equipment (UE) beam measurements and positions collected by
the base station (BS) to enable handoffs between beams that belong to the same
or different BSs. We propose the use of long short-term memory (LSTM) recurrent
neural networks with three different approaches and vary the number of
lookbacks of the beam measurements to study the prediction accuracy.
Simulations show that at a sufficiently large number of lookbacks, the UE
positions become irrelevant to the prediction accuracy since the LSTMs are able
to learn the optimal beam based on implicitly defined positions from the
time-defined trajectories.
Comment: 22 pages, 9 figures. Submitted to IEEE Transactions on Communications
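As one possible reading of the lookback idea, the sketch below feeds a window of per-beam measurements to an LSTM that classifies the next best beam; the layer sizes, synthetic data, and training loop are assumptions and do not reproduce the paper's three approaches.

```python
# Lookback-based LSTM beam predictor on synthetic data (illustrative only).
# Input: a LOOKBACK-long window of per-beam measurements for one UE;
# output: the index of the predicted best beam.
import torch
import torch.nn as nn

N_BEAMS, LOOKBACK = 8, 10    # assumed beam count and lookback window

class BeamLSTM(nn.Module):
    def __init__(self):
        super().__init__()
        self.lstm = nn.LSTM(input_size=N_BEAMS, hidden_size=32, batch_first=True)
        self.head = nn.Linear(32, N_BEAMS)    # classify the best beam
    def forward(self, x):                     # x: (batch, LOOKBACK, N_BEAMS)
        out, _ = self.lstm(x)
        return self.head(out[:, -1, :])       # use the last time step only

torch.manual_seed(0)
# Synthetic windows of per-beam measurements; the toy label is the beam that
# is strongest at the end of the window, so the history alone is predictive.
x = torch.randn(256, LOOKBACK, N_BEAMS)
y = x[:, -1, :].argmax(dim=1)

model = BeamLSTM()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()
for _ in range(100):
    opt.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    opt.step()
print(f"final training loss: {loss.item():.3f}")
```

With a long enough window, a model of this shape can infer the UE's trajectory implicitly from the measurement history, which is the intuition behind the abstract's claim that explicit UE positions become redundant.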